Search results for "Adaptive Markov chain Monte Carlo"

Showing 6 of 6 documents

On the stability of some controlled Markov chains and its applications to stochastic approximation with Markovian dynamic

2015

We develop a practical approach to establish the stability, that is, the recurrence in a given set, of a large class of controlled Markov chains. These processes arise in various areas of applied science and encompass important numerical methods. We show in particular how individual Lyapunov functions and associated drift conditions for the parametrized family of Markov transition probabilities and the parameter update can be combined to form Lyapunov functions for the joint process, leading to the proof of the desired stability property. Of particular interest is the fact that the approach applies even in situations where the two components of the process present a time-scale separation, w…
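
The Lyapunov drift argument sketched above can be illustrated numerically. The following toy check (my own example, not from the paper) verifies a geometric drift condition E[V(X_{n+1}) | X_n = x] <= lam*V(x) + b for a simple AR(1) chain with V(x) = x^2; the chain, the drift function, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain X_{n+1} = a*X_n + eps, eps ~ N(0, sigma^2), with V(x) = x^2.
# The drift bound E[V(X_{n+1}) | X_n = x] <= lam*V(x) + b holds exactly
# with lam = a**2 and b = sigma**2.
a, sigma = 0.9, 1.0
lam, b = a**2, sigma**2

def expected_V_next(x, n_samples=200_000):
    """Monte Carlo estimate of E[V(X_{n+1}) | X_n = x]."""
    eps = rng.normal(0.0, sigma, size=n_samples)
    return float(np.mean((a * x + eps) ** 2))

for x in (0.0, 1.0, 5.0):
    print(f"x={x:4.1f}  E[V(X')] ~ {expected_V_next(x):8.3f}"
          f"  lam*V(x)+b = {lam * x**2 + b:8.3f}")
```

In the paper's setting the transition kernel is additionally controlled by an adapted parameter; this sketch only shows what a single-kernel drift condition looks like in practice.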

Keywords: controlled Markov chains; Lyapunov function; stability; Markov chain; stochastic approximation; computational statistics; adaptive Markov chain Monte Carlo; MSC 60J05, 60J22, 65C05

Adaptive independent sticky MCMC algorithms

2018

In this work, we introduce a novel class of adaptive Monte Carlo methods, called adaptive independent sticky MCMC algorithms, for efficient sampling from a generic target probability density function (pdf). The new class of algorithms employs adaptive non-parametric proposal densities which become closer and closer to the target as the number of iterations increases. The proposal pdf is built using interpolation procedures based on a set of support points which is constructed iteratively based on previously drawn samples. The algorithm's efficiency is ensured by a test that controls the evolution of the set of support points. This extra stage controls the computational cost and the converge…
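
A loose sketch of the core mechanism only — an independence Metropolis sampler whose nonparametric proposal is rebuilt from a growing set of support points — is given below. This is a crude stand-in for the paper's algorithm: the proposal is piecewise-constant rather than the authors' interpolated construction, and a simple cap on the support size replaces their acceptance test. The target density and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    # unnormalised target pdf (an illustrative choice, not from the paper)
    return np.exp(-0.5 * np.asarray(x) ** 2) * (1.0 + np.sin(3.0 * np.asarray(x)) ** 2)

def proposal_from_support(support):
    """Piecewise-constant proposal built on the sorted support grid."""
    s = np.sort(np.asarray(support))
    heights = 0.5 * (target(s[:-1]) + target(s[1:]))  # per-segment level
    widths = np.diff(s)
    masses = heights * widths
    probs = masses / masses.sum()                     # segment selection probabilities
    dens = probs / widths                             # normalised density on each segment
    return s, probs, dens

def q_sample(s, probs, dens):
    i = rng.choice(len(probs), p=probs)
    return rng.uniform(s[i], s[i + 1]), dens[i]

def q_density(x, s, dens):
    i = np.clip(np.searchsorted(s, x) - 1, 0, len(dens) - 1)
    return dens[i]

support = [-4.0, -1.0, 0.0, 1.0, 4.0]
x, samples = 0.0, []
for _ in range(5000):
    s, probs, dens = proposal_from_support(support)
    y, qy = q_sample(s, probs, dens)
    qx = q_density(x, s, dens)
    # independence Metropolis accept/reject
    if rng.uniform() < min(1.0, target(y) * qx / (target(x) * qy)):
        x = y
    elif len(support) < 50:
        support.append(y)     # refine the proposal where it was rejected
    samples.append(x)

print("posterior mean estimate:", np.mean(samples[1000:]))  # target is symmetric about 0
```

As the support set grows, the proposal tracks the target more closely and rejections become rarer, which is the intuition behind the "closer and closer to the target" property described above.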

Keywords: adaptive Markov chain Monte Carlo (MCMC); adaptive rejection Metropolis sampling (ARMS); Bayesian inference; Gibbs sampling; hit-and-run algorithm; Metropolis-within-Gibbs; Monte Carlo methods; statistical signal processing
EURASIP Journal on Advances in Signal Processing

Conditional particle filters with diffuse initial distributions

2020

Conditional particle filters (CPFs) are powerful smoothing algorithms for general nonlinear/non-Gaussian hidden Markov models. However, CPFs can be inefficient or difficult to apply with diffuse initial distributions, which are common in statistical applications. We propose a simple but generally applicable auxiliary variable method, which can be used together with the CPF in order to perform efficient inference with diffuse initial distributions. The method only requires simulatable Markov transitions that are reversible with respect to the initial distribution, which can be improper. We focus in particular on random-walk type transitions which are reversible with respect to a uniform init…

Keywords: adaptive Markov chain Monte Carlo; conditional particle filter; diffuse initialisation; hidden Markov model; state space model; smoothing; particle filter; Bayesian inference; compartment model; random walk; autoregressive model; Markov chains

Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$ \[ S_n = Cov(X_1,...,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
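
The covariance recursion quoted above is easy to sketch. In this minimal Adaptive Metropolis implementation the value eps = 0.01, the 50-step non-adaptive warm-up, the 2.38^2/d proposal scaling, and the 2-D Gaussian target are my own illustrative choices, not prescribed by the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def am_step(x, history, target_logpdf, eps=0.01, burn_in=50):
    """One step of an Adaptive Metropolis sketch with S_n = Cov(history) + eps*I."""
    d = len(x)
    if len(history) > burn_in:
        S = np.cov(np.asarray(history).T) + eps * np.eye(d)
    else:
        S = np.eye(d)                     # fixed covariance while warming up
    y = rng.multivariate_normal(x, (2.38**2 / d) * S)
    if np.log(rng.uniform()) < target_logpdf(y) - target_logpdf(x):
        x = y
    history.append(np.copy(x))
    return x

logpdf = lambda x: -0.5 * float(x @ x)    # standard 2-D Gaussian target
x = np.zeros(2)
history = [np.copy(x)]
for _ in range(3000):
    x = am_step(x, history, logpdf)

emp_cov = np.cov(np.asarray(history).T)
print(emp_cov)                            # should be roughly the identity
```

The article's question is precisely what can go wrong when the eps*I term is dropped, so the stabilising role of that term in this sketch should not be taken for granted in the variants it studies.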

Keywords: adaptive Markov chain Monte Carlo; Metropolis algorithm; stability; stochastic approximation; covariance; eigenvalues; law of large numbers; MSC 65C40, 60J27, 93E15, 93E35

On the stability and ergodicity of adaptive scaling Metropolis algorithms

2011

The stability and ergodicity properties of two adaptive random walk Metropolis algorithms are considered. Both algorithms adjust the scaling of the proposal distribution continuously based on the observed acceptance probability. Unlike previously proposed forms of the algorithms, the adapted scaling parameter is not constrained to a predefined compact interval. The first algorithm is based on scale adaptation only, while the second also incorporates covariance adaptation. A strong law of large numbers is shown to hold assuming that the target density is sufficiently smooth and has either compact support or super-exponentially decaying tails.
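
The scale adaptation described above — nudging an unconstrained log-scale toward a target acceptance rate — can be sketched as follows. The step-size sequence gamma_n = n^(-2/3), the target acceptance rate 0.44, and the 1-D Gaussian target are assumed illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_scaling_rwm(logpdf, x0, n_iter=5000, target_acc=0.44):
    """Random-walk Metropolis with unconstrained log-scale adaptation."""
    x, log_theta, chain = x0, 0.0, []
    for n in range(1, n_iter + 1):
        y = x + np.exp(log_theta) * rng.normal()
        acc = min(1.0, np.exp(logpdf(y) - logpdf(x)))
        if rng.uniform() < acc:
            x = y
        # Robbins-Monro step on log(theta); note that theta itself is never
        # projected back into a compact interval, matching the paper's setting
        log_theta += n ** (-2 / 3) * (acc - target_acc)
        chain.append(x)
    return np.array(chain), float(np.exp(log_theta))

chain, theta = adaptive_scaling_rwm(lambda x: -0.5 * x**2, 0.0)
print("adapted scale:", theta, " sample variance:", chain[1000:].var())
```

Because log(theta) is updated without projection, proving that it does not drift to plus or minus infinity is exactly the kind of stability question the paper addresses.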

Keywords: adaptive Markov chain Monte Carlo; Metropolis algorithm; stability; ergodicity; scaling; stochastic approximation; law of large numbers; random walk; covariance; MSC 65C40, 60J27, 93E15, 93E35
Stochastic Processes and their Applications

Can the adaptive Metropolis algorithm collapse without the covariance lower bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$ \[ S_n = Cov(X_1,...,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …

Keywords: stability; Metropolis algorithm; adaptive Markov chain Monte Carlo; stochastic approximation